Independent research shows that X's (Twitter's) algorithm can influence political polarisation
A US research team has devised a method that uses a browser extension to re-rank, within the user's own browser, the feed produced by X's (formerly Twitter's) algorithm, in order to study its impact on user behaviour. In a 10-day experiment with 1,256 volunteers during the 2024 US presidential campaign, they used the method to vary the amount of content expressing anti-democratic attitudes and partisan hostility. According to the authors, the results, published in Science, provide causal evidence that increasing or decreasing exposure to this type of content shifts polarisation in the same direction. Their conclusions contradict previous research published in the same journal, which found no such relationship on Facebook or Instagram; in that case, the study was conducted in collaboration with, and funded by, Meta.
Daniel Gayo Avello
Full Professor at the University of Oviedo in the area of “Computer Languages and Systems”
I think the work is excellent and presents a tremendously astute methodological innovation for studying the effects of algorithmic curation [selection] on social media without the cooperation (or approval) of the platforms themselves. This innovation (using a browser plugin to perform client-side reranking) allows them to overcome a challenge that until now seemed insurmountable: modifying the feed consumed by social media users. This approach allowed them to alter the content seen by the test subjects in real time and also obtain feedback from them. For that reason alone, for opening the door to future independent research without the cooperation of social media platforms, it would already be a very important piece of work.
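To make the mechanism concrete, below is a minimal TypeScript sketch of the kind of client-side reranking a browser-extension content script could perform. It is an editorial illustration, not the authors' actual extension: the container selector, the keyword-based scoring function standing in for the study's content classifier, and the demote/promote switch are all hypothetical.

```ts
// Content-script sketch: score each rendered post and move the most hostile
// ones towards the bottom of the feed (or the top, in the "increase" arm).

// Stand-in for the study's content classifier; the keyword list is purely
// illustrative and NOT how the real experiment scored posts.
function hostilityScore(text: string): number {
  const cues = ["traitor", "enemy", "corrupt", "destroy"];
  const lower = text.toLowerCase();
  return cues.filter((cue) => lower.includes(cue)).length;
}

// Rerank the direct children of a container whose children are assumed to be
// one rendered post each (X's real DOM structure is more involved).
function rerankFeed(feed: HTMLElement, demoteHostile: boolean): void {
  const posts = Array.from(feed.children) as HTMLElement[];
  const ranked = posts
    .map((el, index) => ({ el, index, score: hostilityScore(el.innerText) }))
    .sort((a, b) =>
      demoteHostile
        ? a.score - b.score || a.index - b.index // low-hostility first
        : b.score - a.score || a.index - b.index // high-hostility first
    );
  // appendChild moves a node that is already in the DOM, so appending in
  // ranked order rewrites the visual order of the feed in place.
  for (const { el } of ranked) feed.appendChild(el);
}

// Hypothetical selector for the timeline container; a real extension would
// need selectors matched to X's actual markup.
const feed = document.querySelector<HTMLElement>('[data-testid="primaryColumn"]');
if (feed) {
  const observer = new MutationObserver(() => {
    observer.disconnect(); // avoid reacting to our own reordering
    rerankFeed(feed, true); // true = down-rank hostile content
    observer.observe(feed, { childList: true, subtree: true });
  });
  observer.observe(feed, { childList: true, subtree: true });
}
```

The design point the sketch tries to capture is the one highlighted above: everything happens in the participant's browser after X has already selected and ordered the posts, which is why no cooperation from the platform is needed.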
On the other hand, the experimental design was rigorous in that it was pre-registered and carried out in a real context, i.e., with real users during an election campaign. The sample size was also considerable, and appropriate measures were taken to ensure that the participants were indeed US citizens eligible to vote in those elections. This combination of methodological innovation, experimental robustness and transparency in the conduct of the research makes the work credible and a solid contribution.
The work engages directly with the available evidence, situating its findings within the existing literature, notably the studies by Bakshy, Messing and Adamic (2015), Bail et al. (2018) and Altay, Hoes and Wojcieszak (2025). However, the most interesting aspect of this article is the way in which it directly challenges the results of the study carried out by a large team of researchers in collaboration with Meta to analyse the impact of Facebook and Instagram on elections (Guess et al., 2023). While Guess et al. argued that changing the feed (chronological ordering vs. algorithmic curation) had no significant impact on user polarisation, the new work shows that algorithmic decisions do have a substantial impact when the presence of content promoting anti-democratic attitudes or animosity towards opponents is significantly reduced (or increased). Moreover, the new study demonstrates a clear causal relationship: altering exposure to such content also alters emotions and perceptions regarding the opposing political group.
The implications are clear:
- it appears to contradict the results of a study that had the approval of the platform under investigation;
- its methodology opens the door to external audits and replicable experiments, not only by researchers but also by journalists or public administrations;
- it clearly establishes a causal relationship and quantifies the magnitude of the effect;
- by linking problematic content with greater engagement, it points to a powerful incentive for platforms not to mitigate polarisation;
- future studies using similar methodologies could favour the establishment of regulatory policies aimed at designing algorithms that not only optimise such engagement but also minimise undesirable impacts on society.
Of course, the study has several important limitations that must be taken into account. First, its scope is restricted to a very specific context: X/Twitter users during a highly polarised election period in the US and over a fairly short interval. This raises questions about the generalisability of the results to other platforms, times, or different cultural and political contexts. Furthermore, the need to install a browser plugin introduces potential biases, as only users who consumed X/Twitter via the web (rather than the app) and who were willing to alter their user experience participated, which may not reflect the general population.
Finally, although the effects detected are significant, the duration of these effects and their real impact on voting or other forms of citizen participation are unclear. This does not invalidate the study; it simply implies that further research is needed on more platforms, in different contexts and countries, as well as longitudinal studies that also measure the impact on other metrics, such as institutional trust, participation or the quality of democratic discourse, for which multidisciplinary teams would be required.
Walter Quattrociocchi
Director of the Laboratory of Data and Complexity for Society at the University of Rome La Sapienza (Italy)
This is a strong and timely study. The authors manage to do something that until now has been almost unattainable: they run a real field experiment on the ranking algorithm of X without needing permission from the platform. And the result is surprisingly clean. When the feed amplifies hostile, emotionally aggressive political content, people become colder toward the opposing side; when that content is pushed down, they warm up. A two-degree shift on the “feeling thermometer” might look small, but in polarization research it is meaningful — roughly equivalent to three years of natural change.
What matters here is not the generic idea that “algorithms polarize us.” The evidence is more surgical. It is the systematic amplification of a specific category of content — politically hostile, antidemocratic, emotionally charged — that nudges users toward higher affective polarization. This aligns very well with what we observed years ago in The Echo Chamber Effect on Social Media, where interaction patterns and content dynamics reinforce emotional distance more than ideological disagreement itself. In this sense, the new study helps reconcile the mixed evidence from previous large-scale experiments: interventions that simply adjust ideological exposure often do little, whereas interventions that target animosity have a measurable impact.
Naturally, some caution is needed. The experiment occurs in the most heated phase of the 2024 US election, among users with feeds already dense with political material, and the effects are measured in the short term. These conditions amplify emotional sensitivity, so the magnitude of the impact should not be overgeneralized. But the causal mechanism is convincing: by selecting which emotions are amplified, the ranking layer shapes how citizens feel about the opposing side.
And this raises the broader point. When online environments optimize for attention rather than understanding, they transform familiarity, fluency, and emotional resonance into a surrogate for knowledge. This is exactly the phenomenon my colleagues and I call Epistemia — the shift from information that is evaluated to information that merely appears true because the system reinforces it. In this sense, studies like this one are crucial: they show that the architecture of the feed does not only decide what we see, but also what we end up believing we know.
I take the opportunity to share our recent PNAS paper introducing the concept of Epistemia (when systems move from filtering to generating information, linguistic plausibility can override processes of verification), which situates this problem within a broader transformation of the online information ecosystem.
Piccardi et al.
- Research article
- Peer reviewed
- Experimental study
- People